Self-Driving Car Engineer Nanodegree

Deep Learning

Project: Build a Traffic Sign Recognition Classifier

In this notebook, a template is provided for you to implement your functionality in stages, which is required to successfully complete this project. If additional code is required that cannot be included in the notebook, be sure that the Python code is successfully imported and included in your submission if necessary.

Note: Once you have completed all of the code implementations, you need to finalize your work by exporting the iPython Notebook as an HTML document. Before exporting the notebook to html, all of the code cells need to have been run so that reviewers can see the final implementation and output. You can then export the notebook by using the menu above and navigating to \n", "File -> Download as -> HTML (.html). Include the finished document along with this notebook as your submission.

In addition to implementing code, there is a writeup to complete. The writeup should be completed in a separate file, which can be either a markdown file or a pdf document. There is a write up template that can be used to guide the writing process. Completing the code template and writeup template will cover all of the rubric points for this project.

The rubric contains "Stand Out Suggestions" for enhancing the project beyond the minimum requirements. The stand out suggestions are optional. If you decide to pursue the "stand out suggestions", you can include the code in this Ipython notebook and also discuss the results in the writeup file.

Note: Code and Markdown cells can be executed using the Shift + Enter keyboard shortcut. In addition, Markdown cells can be edited by typically double-clicking the cell to enter edit mode.


Step 0: Load The Data

In [1]:
# Load pickled data
import pickle

# TODO: Fill this in based on where you saved the training and testing data

training_file = 'traffic-signs-data/train.p'
validation_file='traffic-signs-data/valid.p'
testing_file = 'traffic-signs-data/test.p'

with open(training_file, mode='rb') as f:
    train = pickle.load(f)
with open(validation_file, mode='rb') as f:
    valid = pickle.load(f)
with open(testing_file, mode='rb') as f:
    test = pickle.load(f)
    
X_train, y_train = train['features'], train['labels']
X_valid, y_valid = valid['features'], valid['labels']
X_test, y_test = test['features'], test['labels']

Step 1: Dataset Summary & Exploration

The pickled data is a dictionary with 4 key/value pairs:

  • 'features' is a 4D array containing raw pixel data of the traffic sign images, (num examples, width, height, channels).
  • 'labels' is a 1D array containing the label/class id of the traffic sign. The file signnames.csv contains id -> name mappings for each id.
  • 'sizes' is a list containing tuples, (width, height) representing the original width and height the image.
  • 'coords' is a list containing tuples, (x1, y1, x2, y2) representing coordinates of a bounding box around the sign in the image. THESE COORDINATES ASSUME THE ORIGINAL IMAGE. THE PICKLED DATA CONTAINS RESIZED VERSIONS (32 by 32) OF THESE IMAGES

Complete the basic data summary below. Use python, numpy and/or pandas methods to calculate the data summary rather than hard coding the results. For example, the pandas shape method might be useful for calculating some of the summary results.

Provide a Basic Summary of the Data Set Using Python, Numpy and/or Pandas

In [2]:
### Replace each question mark with the appropriate value. 
### Use python, pandas or numpy methods rather than hard coding the results

# TODO: Number of training examples
n_train = X_train.shape[0]

# TODO: Number of testing examples.
n_test =  X_test.shape[0]

# TODO: What's the shape of an traffic sign image?
image_shape =  X_train[0].shape

# TODO: How many unique classes/labels there are in the dataset.
n_classes =  len(set(y_train))

print("Number of training examples =", n_train)
print("Number of testing examples =", n_test)
print("Image data shape =", image_shape)
print("Number of classes =", n_classes)
Number of training examples = 34799
Number of testing examples = 12630
Image data shape = (32, 32, 3)
Number of classes = 43

Include an exploratory visualization of the dataset

Visualize the German Traffic Signs Dataset using the pickled file(s). This is open ended, suggestions include: plotting traffic sign images, plotting the count of each sign, etc.

The Matplotlib examples and gallery pages are a great resource for doing visualizations in Python.

NOTE: It's recommended you start with something simple first. If you wish to do more, come back to it after you've completed the rest of the sections.

In [3]:
### Data exploration visualization code goes here.
### Feel free to use as many code cells as needed.
import matplotlib.pyplot as plt
import matplotlib
# Visualizations will be shown in the notebook.
%matplotlib inline
import random
import numpy as np
def visualizeImage(X,y,clim_in=(0, 255), mode='none', y_truth=0):
    def puttext(index,text):
        plt.subplot(numrow, numcol, index)
        plt.text(0,0.5,text)
        plt.gca().set_axis_off()
        
    Colspan=10 # number of column of information. Colspan have to be >5 .
    numcol_image=16 #number of images + first column
    numcol=numcol_image+1+Colspan
    numrow=n_classes+1 #class+header
    Subplotpars=matplotlib.figure.SubplotParams(wspace=0, hspace=0)
    plt.figure(figsize=(int(numcol/2), int(numrow/2)), subplotpars=Subplotpars)

    for i in range(n_classes):
        #image
        X_class=X[y==i]
        if len(X_class)==0:
            continue
        for iImage in range(1,numcol_image):
            index = np.random.randint(0, len(X_class))
            plt.subplot(numrow, numcol, (i+1)*numcol+iImage+1)
            plt.gca().set_axis_off()
            image = X_class[index].squeeze()
            image=np.clip(image,clim_in[0],clim_in[1])
            plt.imshow(image ,clim=clim_in) 
            
        #imformation
        puttext((i+1)*numcol+1,str(i))
        if mode=='numimage':
            num_image=np.sum(y==i)
            puttext((i+1)*numcol+numcol_image+1, str(num_image))
            #bar
            plt.subplot2grid((numrow,numcol), ( (i+1), numcol_image+1), colspan=Colspan)
            num_image_rate=float(num_image)/float(len(X))
            plt.gca().add_patch( plt.Rectangle(xy=[0, 0,1], width=num_image_rate*2, height=0.8) )
            plt.gca().set_axis_off()
        elif mode=='result':
            num_image=np.sum(y==i)
            puttext((i+1)*numcol+numcol_image+1, str(num_image))
            
            num_TP=np.sum((y==i) * (y_truth==i))
            puttext((i+1)*numcol+numcol_image+2, str(num_TP))
            
            num_FN=np.sum((y!=i) * (y_truth==i))
            puttext((i+1)*numcol+numcol_image+3, str(num_FN))
            
            if num_image!=0:
                precision=float(num_TP)/float(num_image)*100.0
                puttext((i+1)*numcol+numcol_image+4, '%.1f' % precision)
            
            recall=float(num_TP)/float(num_TP+num_FN)*100.0
            puttext((i+1)*numcol+numcol_image+5, '%.1f' % recall)
            
            if num_image!=0 and num_TP!=0:
                accuracy=2.0*recall*precision/(recall+precision)
                puttext((i+1)*numcol+numcol_image+6, '%.1f' % accuracy)          
                
    #set header
    puttext(1,'Label')
    if mode=='numimage':
        puttext(numcol_image+1,'num')
    elif mode=='result':
        puttext(numcol_image+1,'num')  
        puttext(numcol_image+2,'TP')
        puttext(numcol_image+3,'FN')
        puttext(numcol_image+4,'prec.')
        puttext(numcol_image+5,'rec.')
        puttext(numcol_image+6,'acc.')
In [4]:
visualizeImage(X_train,y_train,mode='numimage')

Step 2: Design and Test a Model Architecture

Design and implement a deep learning model that learns to recognize traffic signs. Train and test your model on the German Traffic Sign Dataset.

The LeNet-5 implementation shown in the classroom at the end of the CNN lesson is a solid starting point. You'll have to change the number of classes and possibly the preprocessing, but aside from that it's plug and play!

With the LeNet-5 solution from the lecture, you should expect a validation set accuracy of about 0.89. To meet specifications, the validation set accuracy will need to be at least 0.93. It is possible to get an even higher accuracy, but 0.93 is the minimum for a successful project submission.

There are various aspects to consider when thinking about this problem:

  • Neural network architecture (is the network over or underfitting?)
  • Play around preprocessing techniques (normalization, rgb to grayscale, etc)
  • Number of examples per label (some have more than others).
  • Generate fake data.

Here is an example of a published baseline model on this problem. It's not required to be familiar with the approach used in the paper but, it's good practice to try to read papers like these.

Pre-process the Data Set (normalization, grayscale, etc.)

Use the code cell (or multiple code cells, if necessary) to implement the first step of your project.

In [5]:
### Preprocess the data here. Preprocessing steps could include normalization, converting to grayscale, etc.
### Feel free to use as many code cells as needed.
import cv2
#normalization
def preprocess2(X_data):
    n_train = X_data.shape[0]
    imsize=32
    margin=8
    X_out=np.zeros((n_train,imsize,imsize,3),dtype=np.float32)
    for i in range(n_train):
        X_out[i]=X_data[i]
        minX=X_out[i,margin:imsize-margin,margin:imsize-margin,:].min()
        maxX=255.0
        while margin>0:
            X_maxcolor=np.max(X_out[i,margin:imsize-margin,margin:imsize-margin,:],axis=2)
            Xmaxc_sorted=np.sort(X_maxcolor, axis=None) 
            maxX=Xmaxc_sorted[np.ceil(len(Xmaxc_sorted)*0.9).astype(np.int)-1] #10% of ROI suturated
            #avoid 0 divide
            if maxX>minX:
                break
            else:
                print(i)
                margin-=2
        X_out[i]=(X_out[i]-minX)/(maxX-minX)-0.5

        #X_out[i]=(X_out[i]-X_out[i,8:23,8:23,:].min())/(X_out[i,8:23,8:23,:].max()-X_out[i,8:23,8:23,:].min())
        #
    return X_out
In [6]:
X_train_pp=preprocess2(X_train)
visualizeImage(X_train_pp+0.5,y_train,[0,1])
In [7]:
#RGB to YUV and unsharp filtering
def preprocess(X_data):
    n_train = X_data.shape[0]
    X_data_YUV=np.zeros((n_train,32,32,3))
    kernel_sharpen= np.array([[-1,-1,-1,-1,-1],
                             [-1,2,2,2,-1],
                             [-1,2,8,2,-1],
                             [-1,2,2,2,-1],
                             [-1,-1,-1,-1,-1]]) / 8.0
    for i in range(n_train):
        X_data_YUV[i]=cv2.cvtColor((X_data[i]/255.0).astype(np.float32), cv2.COLOR_RGB2YCrCb)
        X_data_YUV[i] = cv2.filter2D(X_data_YUV[i], -1, kernel_sharpen)
        X_data_YUV[i,:,:,0]=(X_data_YUV[i,:,:,0]-X_data_YUV[i,8:24,8:24,1].mean())
        X_data_YUV[i,:,:,1]=np.clip((X_data_YUV[i,:,:,1]-0.5)*1.3,-0.5,0.5)
        X_data_YUV[i,:,:,2]=np.clip((X_data_YUV[i,:,:,2]-0.5)*1.3,-0.5,0.5)
    return X_data_YUV
X_train_pp=preprocess(X_train)
In [8]:
visualizeImage(X_train_pp+0.5,y_train,[0,1])

Augmentation

I balanced number of images per classes

In [9]:
def imagetransform(img):
    random_move=(np.random.randint(1,4,size=(3,2))* ((np.random.rand(3,2)>0.5)*2-1) ).astype(np.float32)
    imsize=img.shape
    pts1 = np.float32([[3,3],[imsize[0]-4,2],[int(imsize[0]/2),imsize[1]-4]])
    pts2 = pts1+random_move
    
    M = cv2.getAffineTransform(pts1,pts2)
    dst = cv2.warpAffine(img,M,(imsize[0],imsize[1]))
    #dst=(dst.astype(np.float32)+np.random.rand(imsize[0],imsize[1],imsize[2])*((np.max(dst)-np.min(dst))/10))
    #dst=np.clip(dst,0,255).astype(np.uint8)
    return dst
def count_image_per_classes(y_in):
    count_out=np.zeros(n_classes)
    for i in range(n_classes):
        count_out[i]=sum(i==y_in)
    return count_out
def augmentation(X_in, y_in):
    numimage_perclasses=count_image_per_classes(y_in)
    maxnumimage=max(numimage_perclasses)
    for i in range(n_classes):
        scale_factor=1+np.floor(maxnumimage/numimage_perclasses[i]/2).astype(int)
        X_class=X_in[i==y_in]
        X_class_Aug=np.zeros((len(X_class)*scale_factor,32,32,3))
        for iImage in range(len(X_class)):
            for iAug in range(scale_factor):
                if iAug==0:
                    X_class_Aug[iImage*scale_factor]=X_class[iImage]
                else:
                    X_class_Aug[iAug+iImage*scale_factor]=imagetransform(X_class[iImage])
        if i==0:
            X_out=X_class_Aug
            y_out=np.ones(scale_factor*len(X_class))*i
        else:
            X_out=np.concatenate((X_out, X_class_Aug), axis=0)
            y_out=np.concatenate((y_out, np.ones(scale_factor*len(X_class))*i))
    return [X_out,y_out]

idx=np.random.randint(1000)
print(idx)
img=X_train[idx]
dst=imagetransform(img)

plt.figure
plt.subplot(1,2,1)
plt.imshow(img)
plt.subplot(1,2,2)
plt.imshow(dst)
655
Out[9]:
<matplotlib.image.AxesImage at 0x1df473e94e0>
In [10]:
[X_train_aug,y_train_aug]=augmentation(X_train, y_train)
X_train_pp_aug=preprocess2(X_train_aug)
In [11]:
visualizeImage(X_train_pp_aug+0.5,y_train_aug, [0,1],mode='numimage')

Model Architecture

In [12]:
### Define your architecture here.
### Feel free to use as many code c7ells as needed.
import tensorflow as tf
from tensorflow.contrib.layers import flatten
#tf.constant(0.1, shape=)
def LeNet(x,keep_prob):    
    global  conv2, conv1
    # Arguments used for tf.truncated_normal, randomly defines variables for the weights and biases for each layer
    mu = 0
    sigma = 0.1

    #  Layer 1: Convolutional. Input = 32x32x3. Output = 28x28x6.
    conv1_W = tf.Variable(tf.truncated_normal(shape=(3, 3, 3, 6), mean = mu, stddev = sigma))
    conv1_b = tf.Variable(tf.zeros(6))
    conv1   = tf.nn.conv2d(x, conv1_W, strides=[1, 1, 1, 1], padding='VALID') + conv1_b
    
    #  Pooling. Input = 28x28x6. Output = 14x14x6.
    conv1 = tf.nn.max_pool(conv1, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='VALID')

    #  Activation.
    conv1 = tf.nn.relu(conv1)
    
    #  Layer 2: Convolutional. Output = 10x10x16.
    conv2_W = tf.Variable(tf.truncated_normal(shape=(5, 5, 6, 16), mean = mu, stddev = sigma))
    conv2_b = tf.Variable(tf.zeros(16))
    conv2   = tf.nn.conv2d(conv1, conv2_W, strides=[1, 1, 1, 1], padding='VALID') + conv2_b
    
    #  Pooling. Input = 10x10x16. Output = 5x5x16.
    conv2 = tf.nn.max_pool(conv2, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='VALID')
    
    #  Activation.
    conv2 = tf.nn.relu(conv2)
    
    #  Flatten. Input = 5x5x16. Output = 400.
    fc0   = flatten(conv2)

    #  Layer 3: Fully Connected. Input = 400. Output = 120.
    #fc1_W=tf.get_variable("fc1_W", shape=[400,120],initializer=tf.contrib.layers.xavier_initializer())
    fc1_W = tf.Variable(tf.truncated_normal(shape=(400, 120), mean = mu, stddev = sigma))
    fc1_b = tf.Variable(tf.zeros(120))
    fc1   = tf.matmul(fc0, fc1_W) + fc1_b

    #  Activation.
    fc1    = tf.nn.relu(fc1)
    fc1 = tf.nn.dropout(fc1, keep_prob)
    
    #  Layer 4: Fully Connected. Input = 120. Output = 84.
    #fc2_W=tf.get_variable("fc2_W", shape=[120,84],initializer=tf.contrib.layers.xavier_initializer())
    fc2_W  = tf.Variable(tf.truncated_normal(shape=(120, 84), mean = mu, stddev = sigma))
    fc2_b  = tf.Variable(tf.zeros(84))
    fc2    = tf.matmul(fc1, fc2_W) + fc2_b

    #  Activation.
    fc2    = tf.nn.relu(fc2)
    fc2 = tf.nn.dropout(fc2, keep_prob)
    
    #  Layer 5: Fully Connected. Input = 84. Output = 43.
    #fc3_W=tf.get_variable("fc3_W", shape=[84,43],initializer=tf.contrib.layers.xavier_initializer())
    fc3_W  = tf.Variable(tf.truncated_normal(shape=(84, 43), mean = mu, stddev = sigma))
    fc3_b  = tf.Variable(tf.zeros(43))
    logits = tf.matmul(fc2, fc3_W) + fc3_b
    

    
    return logits

Train, Validate and Test the Model

A validation set can be used to assess how well the model is performing. A low accuracy on the training and validation sets imply underfitting. A high accuracy on the training set but low accuracy on the validation set implies overfitting.

In [13]:
### Train your model here.
### Calculate and report the accuracy on the training and validation set.
### Once a final model architecture is selected, 
### the accuracy on the test set should be calculated and reported as well.
### Feel free to use as many code cells as needed.
x = tf.placeholder(tf.float32, (None, 32, 32, 3))
y = tf.placeholder(tf.int32, (None))
keep_prob = tf.placeholder(tf.float32)
one_hot_y = tf.one_hot(y, 43)
one_hot_y0 = tf.one_hot(y, 5)
In [14]:
logits = LeNet(x,keep_prob)
cross_entropy = tf.nn.softmax_cross_entropy_with_logits(labels=one_hot_y,logits=logits )
loss_operation = tf.reduce_mean(cross_entropy)
rate=0.002
training_operation = tf.train.AdamOptimizer(learning_rate = rate).minimize(loss_operation)

correct_prediction = tf.equal(tf.argmax(logits, 1), tf.argmax(one_hot_y, 1))
accuracy_operation = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
saver = tf.train.Saver()

def evaluate(X_data, y_data,BATCH_SIZE=200):
    X_data_pp=preprocess2(X_data)
    num_examples = len(y_data)   
    total_accuracy = 0
    total_loss = 0
    sess = tf.get_default_session()
    for offset in range(0, num_examples, BATCH_SIZE):
        batch_x, batch_y = X_data_pp[offset:offset+BATCH_SIZE], y_data[offset:offset+BATCH_SIZE]
        accuracy, loss = sess.run([accuracy_operation,loss_operation], feed_dict={x: batch_x, y: batch_y, keep_prob:1.0})
        total_accuracy += (accuracy * len(batch_y))
        total_loss += (loss * len(batch_y))
    return [total_accuracy / num_examples , total_loss /  num_examples ]

from sklearn.utils import shuffle
def train(X_train_pp,y_train_pp,rate, EPOCHS, BATCH_SIZE):
    n_train=len(y_train)
    training_operation = tf.train.AdamOptimizer(learning_rate = rate).minimize(loss_operation)

    training_accuracy_history = np.zeros(EPOCHS)
    validation_accuracy_history = np.zeros(EPOCHS)
    training_loss_history = np.zeros(EPOCHS)
    validation_loss_history = np.zeros(EPOCHS)
    
    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        print("Training...")
        print()
        for i in range(EPOCHS):
            X_train_pp, y_train_pp = shuffle(X_train_pp, y_train_pp)
            for offset in range(0, n_train, BATCH_SIZE):
                end = offset + BATCH_SIZE
                batch_x, batch_y = X_train_pp[offset:end], y_train_pp[offset:end]
                sess.run(training_operation, feed_dict={x: batch_x, y: batch_y, keep_prob:0.5})
            [training_accuracy, training_loss]=evaluate(X_train, y_train, BATCH_SIZE)        
            [validation_accuracy, validation_loss] = evaluate(X_valid, y_valid, BATCH_SIZE)
            print("EPOCH {} ...".format(i+1))
            print("Training Accuracy = {:.3f}".format(training_accuracy))
            print("Validation Accuracy = {:.3f}".format(validation_accuracy))
            print("Training Loss = {:.3f}".format(training_loss))
            print("Validation Loss = {:.3f}".format(validation_loss))
            print()
            
            training_accuracy_history[i] = training_accuracy
            validation_accuracy_history[i] = validation_accuracy
            training_loss_history[i] =  training_loss
            validation_loss_history[i] = validation_loss

        saver.save(sess, './lenet')
        print("Model saved")
    
    loss_plot = plt.subplot(2,1,1)
    loss_plot.set_title('Loss')
    loss_plot.plot(training_loss_history, 'r', label='Training Loss')
    loss_plot.plot(validation_loss_history, 'b', label='Validation Loss')
    loss_plot.set_xlim([0, EPOCHS])
    loss_plot.legend(loc=2)
    acc_plot = plt.subplot(2,1,2)
    acc_plot.set_title('Accuracy')
    acc_plot.plot(training_accuracy_history,'r', label='Training Accuracy')
    acc_plot.plot(validation_accuracy_history, 'b', label='Validation Accuracy')
    acc_plot.set_ylim([0, 1.0])
    acc_plot.set_xlim([0, EPOCHS])
    acc_plot.legend(loc=4)
    plt.tight_layout()
    plt.show()
In [411]:
EPOCHS = 100
BATCH_SIZE = 200
rate=0.0008

X_train_pp=X_train_pp_aug
y_train_pp=y_train_aug


train(X_train_pp,y_train_pp, rate, EPOCHS, BATCH_SIZE)
Training...

EPOCH 1 ...
Training Accuracy = 0.513
Validation Accuracy = 0.458
Training Loss = 1.731
Validation Loss = 1.805

EPOCH 2 ...
Training Accuracy = 0.737
Validation Accuracy = 0.695
Training Loss = 1.001
Validation Loss = 1.098

EPOCH 3 ...
Training Accuracy = 0.827
Validation Accuracy = 0.784
Training Loss = 0.672
Validation Loss = 0.764

EPOCH 4 ...
Training Accuracy = 0.877
Validation Accuracy = 0.824
Training Loss = 0.505
Validation Loss = 0.612

EPOCH 5 ...
Training Accuracy = 0.910
Validation Accuracy = 0.860
Training Loss = 0.397
Validation Loss = 0.503

EPOCH 6 ...
Training Accuracy = 0.927
Validation Accuracy = 0.867
Training Loss = 0.320
Validation Loss = 0.439

EPOCH 7 ...
Training Accuracy = 0.941
Validation Accuracy = 0.896
Training Loss = 0.254
Validation Loss = 0.363

EPOCH 8 ...
Training Accuracy = 0.946
Validation Accuracy = 0.895
Training Loss = 0.233
Validation Loss = 0.359

EPOCH 9 ...
Training Accuracy = 0.957
Validation Accuracy = 0.914
Training Loss = 0.187
Validation Loss = 0.309

EPOCH 10 ...
Training Accuracy = 0.961
Validation Accuracy = 0.915
Training Loss = 0.164
Validation Loss = 0.292

EPOCH 11 ...
Training Accuracy = 0.966
Validation Accuracy = 0.924
Training Loss = 0.142
Validation Loss = 0.261

EPOCH 12 ...
Training Accuracy = 0.968
Validation Accuracy = 0.919
Training Loss = 0.131
Validation Loss = 0.270

EPOCH 13 ...
Training Accuracy = 0.972
Validation Accuracy = 0.928
Training Loss = 0.117
Validation Loss = 0.242

EPOCH 14 ...
Training Accuracy = 0.974
Validation Accuracy = 0.931
Training Loss = 0.108
Validation Loss = 0.234

EPOCH 15 ...
Training Accuracy = 0.976
Validation Accuracy = 0.934
Training Loss = 0.097
Validation Loss = 0.220

EPOCH 16 ...
Training Accuracy = 0.978
Validation Accuracy = 0.937
Training Loss = 0.093
Validation Loss = 0.211

EPOCH 17 ...
Training Accuracy = 0.980
Validation Accuracy = 0.939
Training Loss = 0.079
Validation Loss = 0.199

EPOCH 18 ...
Training Accuracy = 0.979
Validation Accuracy = 0.934
Training Loss = 0.078
Validation Loss = 0.209

EPOCH 19 ...
Training Accuracy = 0.982
Validation Accuracy = 0.939
Training Loss = 0.070
Validation Loss = 0.207

EPOCH 20 ...
Training Accuracy = 0.984
Validation Accuracy = 0.938
Training Loss = 0.062
Validation Loss = 0.203

EPOCH 21 ...
Training Accuracy = 0.982
Validation Accuracy = 0.939
Training Loss = 0.068
Validation Loss = 0.190

EPOCH 22 ...
Training Accuracy = 0.984
Validation Accuracy = 0.944
Training Loss = 0.060
Validation Loss = 0.184

EPOCH 23 ...
Training Accuracy = 0.986
Validation Accuracy = 0.946
Training Loss = 0.056
Validation Loss = 0.189

EPOCH 24 ...
Training Accuracy = 0.986
Validation Accuracy = 0.942
Training Loss = 0.055
Validation Loss = 0.182

EPOCH 25 ...
Training Accuracy = 0.986
Validation Accuracy = 0.941
Training Loss = 0.052
Validation Loss = 0.200

EPOCH 26 ...
Training Accuracy = 0.987
Validation Accuracy = 0.940
Training Loss = 0.049
Validation Loss = 0.196

EPOCH 27 ...
Training Accuracy = 0.987
Validation Accuracy = 0.942
Training Loss = 0.048
Validation Loss = 0.196

EPOCH 28 ...
Training Accuracy = 0.988
Validation Accuracy = 0.942
Training Loss = 0.046
Validation Loss = 0.196

EPOCH 29 ...
Training Accuracy = 0.989
Validation Accuracy = 0.941
Training Loss = 0.042
Validation Loss = 0.195

EPOCH 30 ...
Training Accuracy = 0.988
Validation Accuracy = 0.942
Training Loss = 0.042
Validation Loss = 0.192

EPOCH 31 ...
Training Accuracy = 0.990
Validation Accuracy = 0.945
Training Loss = 0.037
Validation Loss = 0.171

EPOCH 32 ...
Training Accuracy = 0.991
Validation Accuracy = 0.949
Training Loss = 0.036
Validation Loss = 0.165

EPOCH 33 ...
Training Accuracy = 0.991
Validation Accuracy = 0.952
Training Loss = 0.034
Validation Loss = 0.160

EPOCH 34 ...
Training Accuracy = 0.991
Validation Accuracy = 0.954
Training Loss = 0.035
Validation Loss = 0.167

EPOCH 35 ...
Training Accuracy = 0.991
Validation Accuracy = 0.953
Training Loss = 0.033
Validation Loss = 0.179

EPOCH 36 ...
Training Accuracy = 0.992
Validation Accuracy = 0.949
Training Loss = 0.033
Validation Loss = 0.176

EPOCH 37 ...
Training Accuracy = 0.992
Validation Accuracy = 0.955
Training Loss = 0.031
Validation Loss = 0.171

EPOCH 38 ...
Training Accuracy = 0.992
Validation Accuracy = 0.953
Training Loss = 0.031
Validation Loss = 0.168

EPOCH 39 ...
Training Accuracy = 0.993
Validation Accuracy = 0.955
Training Loss = 0.030
Validation Loss = 0.165

EPOCH 40 ...
Training Accuracy = 0.994
Validation Accuracy = 0.956
Training Loss = 0.027
Validation Loss = 0.153

EPOCH 41 ...
Training Accuracy = 0.993
Validation Accuracy = 0.952
Training Loss = 0.030
Validation Loss = 0.182

EPOCH 42 ...
Training Accuracy = 0.994
Validation Accuracy = 0.950
Training Loss = 0.025
Validation Loss = 0.168

EPOCH 43 ...
Training Accuracy = 0.994
Validation Accuracy = 0.952
Training Loss = 0.025
Validation Loss = 0.171

EPOCH 44 ...
Training Accuracy = 0.994
Validation Accuracy = 0.948
Training Loss = 0.024
Validation Loss = 0.172

EPOCH 45 ...
Training Accuracy = 0.993
Validation Accuracy = 0.954
Training Loss = 0.027
Validation Loss = 0.170

EPOCH 46 ...
Training Accuracy = 0.992
Validation Accuracy = 0.956
Training Loss = 0.027
Validation Loss = 0.160

EPOCH 47 ...
Training Accuracy = 0.994
Validation Accuracy = 0.953
Training Loss = 0.024
Validation Loss = 0.169

EPOCH 48 ...
Training Accuracy = 0.995
Validation Accuracy = 0.955
Training Loss = 0.023
Validation Loss = 0.149

EPOCH 49 ...
Training Accuracy = 0.995
Validation Accuracy = 0.954
Training Loss = 0.021
Validation Loss = 0.157

EPOCH 50 ...
Training Accuracy = 0.995
Validation Accuracy = 0.960
Training Loss = 0.021
Validation Loss = 0.142

EPOCH 51 ...
Training Accuracy = 0.994
Validation Accuracy = 0.950
Training Loss = 0.022
Validation Loss = 0.170

EPOCH 52 ...
Training Accuracy = 0.994
Validation Accuracy = 0.959
Training Loss = 0.021
Validation Loss = 0.149

EPOCH 53 ...
Training Accuracy = 0.994
Validation Accuracy = 0.951
Training Loss = 0.022
Validation Loss = 0.169

EPOCH 54 ...
Training Accuracy = 0.995
Validation Accuracy = 0.954
Training Loss = 0.020
Validation Loss = 0.171

EPOCH 55 ...
Training Accuracy = 0.995
Validation Accuracy = 0.960
Training Loss = 0.018
Validation Loss = 0.153

EPOCH 56 ...
Training Accuracy = 0.995
Validation Accuracy = 0.956
Training Loss = 0.019
Validation Loss = 0.163

EPOCH 57 ...
Training Accuracy = 0.996
Validation Accuracy = 0.958
Training Loss = 0.018
Validation Loss = 0.161

EPOCH 58 ...
Training Accuracy = 0.995
Validation Accuracy = 0.957
Training Loss = 0.017
Validation Loss = 0.159

EPOCH 59 ...
Training Accuracy = 0.995
Validation Accuracy = 0.958
Training Loss = 0.017
Validation Loss = 0.167

EPOCH 60 ...
Training Accuracy = 0.996
Validation Accuracy = 0.959
Training Loss = 0.016
Validation Loss = 0.163

EPOCH 61 ...
Training Accuracy = 0.996
Validation Accuracy = 0.958
Training Loss = 0.016
Validation Loss = 0.164

EPOCH 62 ...
Training Accuracy = 0.995
Validation Accuracy = 0.962
Training Loss = 0.017
Validation Loss = 0.149

EPOCH 63 ...
Training Accuracy = 0.995
Validation Accuracy = 0.955
Training Loss = 0.017
Validation Loss = 0.165

EPOCH 64 ...
Training Accuracy = 0.996
Validation Accuracy = 0.954
Training Loss = 0.016
Validation Loss = 0.172

EPOCH 65 ...
Training Accuracy = 0.995
Validation Accuracy = 0.961
Training Loss = 0.015
Validation Loss = 0.149

EPOCH 66 ...
Training Accuracy = 0.996
Validation Accuracy = 0.959
Training Loss = 0.015
Validation Loss = 0.161

EPOCH 67 ...
Training Accuracy = 0.996
Validation Accuracy = 0.961
Training Loss = 0.014
Validation Loss = 0.157

EPOCH 68 ...
Training Accuracy = 0.997
Validation Accuracy = 0.960
Training Loss = 0.014
Validation Loss = 0.147

EPOCH 69 ...
Training Accuracy = 0.996
Validation Accuracy = 0.959
Training Loss = 0.015
Validation Loss = 0.148

EPOCH 70 ...
Training Accuracy = 0.996
Validation Accuracy = 0.959
Training Loss = 0.015
Validation Loss = 0.150

EPOCH 71 ...
Training Accuracy = 0.997
Validation Accuracy = 0.961
Training Loss = 0.013
Validation Loss = 0.155

EPOCH 72 ...
Training Accuracy = 0.996
Validation Accuracy = 0.956
Training Loss = 0.013
Validation Loss = 0.167

EPOCH 73 ...
Training Accuracy = 0.997
Validation Accuracy = 0.963
Training Loss = 0.012
Validation Loss = 0.151

EPOCH 74 ...
Training Accuracy = 0.997
Validation Accuracy = 0.959
Training Loss = 0.012
Validation Loss = 0.159

EPOCH 75 ...
Training Accuracy = 0.996
Validation Accuracy = 0.961
Training Loss = 0.011
Validation Loss = 0.144

EPOCH 76 ...
Training Accuracy = 0.995
Validation Accuracy = 0.959
Training Loss = 0.016
Validation Loss = 0.143

EPOCH 77 ...
Training Accuracy = 0.997
Validation Accuracy = 0.961
Training Loss = 0.012
Validation Loss = 0.143

EPOCH 78 ...
Training Accuracy = 0.996
Validation Accuracy = 0.962
Training Loss = 0.014
Validation Loss = 0.148

EPOCH 79 ...
Training Accuracy = 0.997
Validation Accuracy = 0.960
Training Loss = 0.012
Validation Loss = 0.149

EPOCH 80 ...
Training Accuracy = 0.997
Validation Accuracy = 0.957
Training Loss = 0.012
Validation Loss = 0.167

EPOCH 81 ...
Training Accuracy = 0.997
Validation Accuracy = 0.959
Training Loss = 0.011
Validation Loss = 0.158

EPOCH 82 ...
Training Accuracy = 0.997
Validation Accuracy = 0.959
Training Loss = 0.011
Validation Loss = 0.160

EPOCH 83 ...
Training Accuracy = 0.997
Validation Accuracy = 0.958
Training Loss = 0.011
Validation Loss = 0.167

EPOCH 84 ...
Training Accuracy = 0.997
Validation Accuracy = 0.960
Training Loss = 0.011
Validation Loss = 0.156

EPOCH 85 ...
Training Accuracy = 0.998
Validation Accuracy = 0.961
Training Loss = 0.010
Validation Loss = 0.144

EPOCH 86 ...
Training Accuracy = 0.996
Validation Accuracy = 0.960
Training Loss = 0.012
Validation Loss = 0.156

EPOCH 87 ...
Training Accuracy = 0.997
Validation Accuracy = 0.962
Training Loss = 0.009
Validation Loss = 0.160

EPOCH 88 ...
Training Accuracy = 0.997
Validation Accuracy = 0.959
Training Loss = 0.011
Validation Loss = 0.164

EPOCH 89 ...
Training Accuracy = 0.997
Validation Accuracy = 0.961
Training Loss = 0.011
Validation Loss = 0.153

EPOCH 90 ...
Training Accuracy = 0.997
Validation Accuracy = 0.958
Training Loss = 0.011
Validation Loss = 0.156

EPOCH 91 ...
Training Accuracy = 0.997
Validation Accuracy = 0.961
Training Loss = 0.009
Validation Loss = 0.159

EPOCH 92 ...
Training Accuracy = 0.997
Validation Accuracy = 0.959
Training Loss = 0.010
Validation Loss = 0.165

EPOCH 93 ...
Training Accuracy = 0.998
Validation Accuracy = 0.964
Training Loss = 0.009
Validation Loss = 0.151

EPOCH 94 ...
Training Accuracy = 0.998
Validation Accuracy = 0.964
Training Loss = 0.009
Validation Loss = 0.147

EPOCH 95 ...
Training Accuracy = 0.997
Validation Accuracy = 0.961
Training Loss = 0.010
Validation Loss = 0.155

EPOCH 96 ...
Training Accuracy = 0.997
Validation Accuracy = 0.965
Training Loss = 0.011
Validation Loss = 0.146

EPOCH 97 ...
Training Accuracy = 0.997
Validation Accuracy = 0.963
Training Loss = 0.009
Validation Loss = 0.147

EPOCH 98 ...
Training Accuracy = 0.998
Validation Accuracy = 0.964
Training Loss = 0.009
Validation Loss = 0.146

EPOCH 99 ...
Training Accuracy = 0.998
Validation Accuracy = 0.965
Training Loss = 0.008
Validation Loss = 0.146

EPOCH 100 ...
Training Accuracy = 0.998
Validation Accuracy = 0.965
Training Loss = 0.008
Validation Loss = 0.142

Model saved
In [412]:
softmax = tf.nn.softmax(logits)

with tf.Session() as sess:
    saver.restore(sess, tf.train.latest_checkpoint('.'))
    test_accuracy = evaluate(X_test, y_test)[0]
    print("Test Accuracy = {:.3f}".format(test_accuracy))

    prediction_prob=sess.run(softmax, {x: X_test_pp, keep_prob:1.0 })
prediction=np.argmax(prediction_prob,1)
Test Accuracy = 0.946
In [413]:
X_test_pp=preprocess2(X_test)
visualizeImage(X_test_pp+0.5,prediction,[0,1],y_truth=y_test,mode='result')

Step 3: Test a Model on New Images

To give yourself more insight into how your model is working, download at least five pictures of German traffic signs from the web and use your model to predict the traffic sign type.

You may find signnames.csv useful as it contains mappings from the class id (integer) to the actual sign name.

Load and Output the Images

In [414]:
### Load the images and plot them here.
### Feel free to use as many code cells as needed.
### Load the images and plot them here.
### Feel free to use as many code cells as needed.

test_files=['STOP_normal.jpg',   #normal
            'STOP_graffiti.jpg',   #normal
            'STOP_snow.jpg',   #occluded with snow
            'NoEntry_normal.jpg',   #small
            'NoEntry_lettees.jpg',   #not in class
            ]
num=len(test_files)
print('num=%d'%num)
imagesize=32
X_new = np.zeros(shape=(num,imagesize,imagesize,3),dtype=np.float32)
Y_new = np.array([14,14,14,17,17])
for n in range(num):
    img = cv2.imread('./newimages/' + test_files[n], 1)
    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB).astype(np.float32)    
    img_resize = cv2.resize(img, (32, 32))
    X_new[n]=img_resize
    plt.subplot(1,num,n+1)
    plt.imshow(img_resize.astype(np.uint8))

X_new_pp=preprocess2(X_new)
num=5

Predict the Sign Type for Each Image

In [416]:
### Run the predictions here and use the model to output the prediction for each image.
### Make sure to pre-process the images with the same pre-processing pipeline used earlier.
### Feel free to use as many code cells as needed.
softmax = tf.nn.softmax(logits)
prediction_model=tf.argmax(logits, 1)
sess2 = tf.get_default_session()
with tf.Session() as sess2:
    saver.restore(sess2, tf.train.latest_checkpoint('.'))  
    pred_new=sess2.run(prediction_model, {x: X_new_pp,  keep_prob:1.0})
    print(pred_new)
    
[14 14 28 17 17]

Analyze Performance

In [419]:
### Calculate the accuracy for these 5 new images. 
### For example, if the model predicted 1 out of 5 signs correctly, it's 20% accurate on these new images.
with tf.Session() as sess:
    saver.restore(sess, tf.train.latest_checkpoint('.'))

    test_accuracy = evaluate(X_new_pp, Y_new)[0]
    print("Test Accuracy = {:.3f}".format(test_accuracy))
Test Accuracy = 0.800

Output Top 5 Softmax Probabilities For Each Image Found on the Web

For each of the new images, print out the model's softmax probabilities to show the certainty of the model's predictions (limit the output to the top 5 probabilities for each image). tf.nn.top_k could prove helpful here.

The example below demonstrates how tf.nn.top_k can be used to find the top k predictions for each image.

tf.nn.top_k will return the values and indices (class ids) of the top k predictions. So if k=3, for each sign, it'll return the 3 largest probabilities (out of a possible 43) and the correspoding class ids.

Take this numpy array as an example. The values in the array represent predictions. The array contains softmax probabilities for five candidate images with six possible classes. tk.nn.top_k is used to choose the three classes with the highest probability:

# (5, 6) array
a = np.array([[ 0.24879643,  0.07032244,  0.12641572,  0.34763842,  0.07893497,
         0.12789202],
       [ 0.28086119,  0.27569815,  0.08594638,  0.0178669 ,  0.18063401,
         0.15899337],
       [ 0.26076848,  0.23664738,  0.08020603,  0.07001922,  0.1134371 ,
         0.23892179],
       [ 0.11943333,  0.29198961,  0.02605103,  0.26234032,  0.1351348 ,
         0.16505091],
       [ 0.09561176,  0.34396535,  0.0643941 ,  0.16240774,  0.24206137,
         0.09155967]])

Running it through sess.run(tf.nn.top_k(tf.constant(a), k=3)) produces:

TopKV2(values=array([[ 0.34763842,  0.24879643,  0.12789202],
       [ 0.28086119,  0.27569815,  0.18063401],
       [ 0.26076848,  0.23892179,  0.23664738],
       [ 0.29198961,  0.26234032,  0.16505091],
       [ 0.34396535,  0.24206137,  0.16240774]]), indices=array([[3, 0, 5],
       [0, 1, 4],
       [0, 5, 1],
       [1, 3, 5],
       [1, 4, 3]], dtype=int32))

Looking just at the first row we get [ 0.34763842, 0.24879643, 0.12789202], you can confirm these are the 3 largest probabilities in a. You'll also notice [3, 0, 5] are the corresponding indices.

In [421]:
### Print out the top five softmax probabilities for the predictions on the German traffic sign images found on the web. 
### Feel free to use as many code cells as needed.
with tf.Session() as sess:
    saver.restore(sess, tf.train.latest_checkpoint('.'))
    a=sess.run(softmax, {x: X_new_pp, keep_prob:1.0})
    top5=sess.run(tf.nn.top_k(tf.constant(a),k=5))
    print(top5)
TopKV2(values=array([[  1.00000000e+00,   2.68676831e-10,   8.00821076e-11,
          1.07643408e-12,   1.81243692e-13],
       [  9.93025899e-01,   5.58323041e-03,   3.75851203e-04,
          3.08767339e-04,   2.41822345e-04],
       [  3.10719669e-01,   1.72849730e-01,   1.12014562e-01,
          8.03531036e-02,   6.73390403e-02],
       [  1.00000000e+00,   3.42431328e-09,   2.19034796e-17,
          1.45306517e-17,   1.32297345e-20],
       [  1.00000000e+00,   3.64426072e-08,   8.40841563e-10,
          2.95670120e-11,   3.29269913e-15]], dtype=float32), indices=array([[14,  1,  0, 38, 29],
       [14, 29, 38, 25, 22],
       [28,  1,  3,  0,  6],
       [17, 14,  9, 16, 12],
       [17, 14,  9, 16, 41]]))

Step 4: Visualize the Neural Network's State with Test Images

This Section is not required to complete but acts as an additional excersise for understaning the output of a neural network's weights. While neural networks can be a great learning device they are often referred to as a black box. We can understand what the weights of a neural network look like better by plotting their feature maps. After successfully training your neural network you can see what it's feature maps look like by plotting the output of the network's weight layers in response to a test stimuli image. From these plotted feature maps, it's possible to see what characteristics of an image the network finds interesting. For a sign, maybe the inner network feature maps react with high activation to the sign's boundary outline or to the contrast in the sign's painted symbol.

Provided for you below is the function code that allows you to get the visualization output of any tensorflow weight layer you want. The inputs to the function should be a stimuli image, one used during training or a new one you provided, and then the tensorflow variable name that represents the layer's state during the training process, for instance if you wanted to see what the LeNet lab's feature maps looked like for it's second convolutional layer you could enter conv2 as the tf_activation variable.

For an example of what feature map outputs look like, check out NVIDIA's results in their paper End-to-End Deep Learning for Self-Driving Cars in the section Visualization of internal CNN State. NVIDIA was able to show that their network's inner weights had high activations to road boundary lines by comparing feature maps from an image with a clear path to one without. Try experimenting with a similar test to show that your trained network's weights are looking for interesting features, whether it's looking at differences in feature maps from images with or without a sign, or even what feature maps look like in a trained network vs a completely untrained one on the same sign image.

Combined Image

Your output should look something like this (above)

In [428]:
### Visualize your network's feature maps here.
### Feel free to use as many code cells as needed.

# image_input: the test image being fed into the network to produce the feature maps
# tf_activation: should be a tf variable name used during your training procedure that represents the calculated state of a specific weight layer
# activation_min/max: can be used to view the activation contrast in more detail, by default matplot sets min and max to the actual min and max values of the output
# plt_num: used to plot out multiple different weight feature map sets on the same block, just extend the plt number for each new feature map entry

def outputFeatureMap(image_input, tf_activation, activation_min=-1, activation_max=-1 ,plt_num=1):
    # Here make sure to preprocess your image_input in a way your network expects
    # with size, normalization, ect if needed
    # image_input =
    # Note: x should be the same name as your network's tensorflow data placeholder variable
    # If you get an error tf_activation is not defined it maybe having trouble accessing the variable from inside a function
    activation = tf_activation.eval(session=sess,feed_dict={x : image_input, keep_prob:0.5})
    featuremaps = activation.shape[3]
    plt.figure(plt_num, figsize=(15,15))
    for featuremap in range(featuremaps):
        plt.subplot(6,8, featuremap+1) # sets the number of feature maps to show on each row and column
        plt.title('FeatureMap ' + str(featuremap)) # displays the feature map number
        if activation_min != -1 & activation_max != -1:
            plt.imshow(activation[0,:,:, featuremap], interpolation="nearest", vmin =activation_min, vmax=activation_max, cmap="gray")
        elif activation_max != -1:
            plt.imshow(activation[0,:,:, featuremap], interpolation="nearest", vmax=activation_max, cmap="gray")
        elif activation_min !=-1:
            plt.imshow(activation[0,:,:, featuremap], interpolation="nearest", vmin=activation_min, cmap="gray")
        else:
            plt.imshow(activation[0,:,:, featuremap], interpolation="nearest", cmap="gray")
In [430]:
with tf.Session() as sess:
    saver.restore(sess, tf.train.latest_checkpoint('.'))
    outputFeatureMap(X_train_pp, conv1)
In [429]:
with tf.Session() as sess:
    saver.restore(sess, tf.train.latest_checkpoint('.'))
    outputFeatureMap(X_train_pp, conv2)

Question 9

Discuss how you used the visual output of your trained network's feature maps to show that it had learned to look for interesting characteristics in traffic sign images

Answer:
In 1st layer, edge shapes on different color are learned.
In 2nd layer, rough shapes and line directions are learned.

Note: Once you have completed all of the code implementations and successfully answered each question above, you may finalize your work by exporting the iPython Notebook as an HTML document. You can do this by using the menu above and navigating to \n", "File -> Download as -> HTML (.html). Include the finished document along with this notebook as your submission.

Project Writeup

Once you have completed the code implementation, document your results in a project writeup using this template as a guide. The writeup can be in a markdown or pdf file.